
Love in the Time of A.I. Companions

The New Yorker

Some people now have an A.I. bestie. One user said, of her A.I. husband, "When he proposed, I thought, Oh, that's really crazy. I would be really crazy to accept."

Adrianne Brookins is, by her own account, an "old soul," an "introvert," and a "big nerd." She is thirty-four years old, has a faint Texas accent and delicate features, and carries herself in a way that suggests she's trying not to take up space. Brookins is a lifelong resident of San Antonio; her family has lived there since the nineteenth century. She was "born and raised in the Church," a Baptist congregation where her mother helped start a day-care center and her father was an organist. "He would open up the pipes and just make the building shake," she recalled recently. She met her husband in high school, and married him in 2011; the following year, they had a son. Throughout her twenties, Brookins worked multiple jobs, including one at her mother's day care. The couple bought a house and began settling into family life.

In 2016, Brookins became pregnant again, this time with a girl. The family was excited: Brookins had grown up with four brothers, and the baby would be the first granddaughter on either side. They decided to name her Desirae. The following spring, Desirae was delivered stillborn. "When I came home, my son, who was about four or five at the time, walked up to me and said, 'What happened to your stomach? Where's the baby?' " she told me. "I had nothing to show for it." At the funeral, the gravedigger told the family he had never seen such a small casket.

Brookins attended support groups and therapy, but they did little to alleviate her grief. "I felt like I was just living it over and over," she said. She left her job at the day care, finding it too triggering to be around infants. Friends and family encouraged her to move on. Brookins's husband was working sixty-hour weeks, balancing a career in the military with a job as a training manager for Pizza Hut. He was reluctant to talk about Desirae. Brookins tried to find solace in the Church, but other congregants told her that her daughter's death was part of God's plan.


Auslan-Daily: Australian Sign Language Translation for Daily Communication and News

Neural Information Processing Systems

Considering that different geographic regions generally have their own native sign languages, it is valuable to establish corresponding SLT datasets to support related communication and research. Auslan, the sign language specific to Australia, still lacks a dedicated large-scale dataset for SLT.



MM-WLAuslan: Multi-View Multi-Modal Word-Level Australian Sign Language Recognition Dataset

Neural Information Processing Systems

Considering the diversity of sign languages across geographical regions, developing region-specific ISLR datasets is crucial for supporting communication and research. Auslan, the sign language specific to Australia, still lacks a dedicated large-scale word-level dataset for the ISLR task.






BondBERT: What we learn when assigning sentiment in the bond market

Barter, Toby, Gao, Zheng, Christodoulaki, Eva, Chen, Jing, Cartlidge, John

arXiv.org Artificial Intelligence

Bond markets respond differently to macroeconomic news than equity markets do, yet most sentiment models are trained primarily on general financial or equity news data. Because bond prices often move in the opposite direction to economic optimism, general or equity-based sentiment tools can be misleading. We introduce BondBERT, a transformer-based language model fine-tuned on bond-specific news. BondBERT can act as the perception and reasoning component of a financial decision-support agent, providing sentiment signals that integrate with forecasting models. We propose a generalisable framework for adapting transformers to low-volatility, domain-inverse sentiment tasks by compiling and cleaning 30,000 UK bond market articles (2018-2025). BondBERT's sentiment predictions are compared against FinBERT, FinGPT, and Instruct-FinGPT using event-based correlation, up/down accuracy analyses, and LSTM forecasting across ten UK sovereign bonds. We find that BondBERT consistently produces positive correlations with bond returns, and achieves higher alignment and forecasting accuracy than the three baseline models. These results demonstrate that domain-specific sentiment adaptation better captures fixed-income dynamics, bridging a gap between NLP advances and bond market analytics.


Extracting Disaster Impacts and Impact Related Locations in Social Media Posts Using Large Language Models

Hameed, Sameeah Noreen, Ranathunga, Surangika, Prasanna, Raj, Stock, Kristin, Jones, Christopher B.

arXiv.org Artificial Intelligence

Large-scale disasters often have catastrophic consequences for people and infrastructure. Situation awareness about such disaster impacts generated by authoritative data from in-situ sensors, remote sensing imagery, and/or geographic data is often limited due to atmospheric opacity, satellite revisits, and time limitations. This often results in geo-temporal information gaps. In contrast, impact-related social media posts can act as "geo-sensors" during a disaster, where people describe specific impacts and locations. However, not all locations mentioned in disaster-related social media posts relate to an impact, and only the impacted locations are critical for directing resources effectively. For example, the post "The death toll from a fire which ripped through the Greek coastal town of #Mati stood at 80, with dozens of people unaccounted for as forensic experts tried to identify victims who were burned alive #Greecefires #AthensFires #Athens #Greece." contains the impacted location "Mati" and the non-impacted locations "Greece" and "Athens". This research uses Large Language Models (LLMs) to identify all locations, impacts, and impacted locations mentioned in disaster-related social media posts. In the process, LLMs are fine-tuned to identify only impacts and impacted locations (as distinct from other, non-impacted locations), including locations mentioned in informal expressions, abbreviations, and short forms. Our fine-tuned model demonstrates efficacy, achieving an F1-score of 0.69 for impact and 0.74 for impacted location extraction, substantially outperforming the pre-trained baseline. These robust results confirm the potential of fine-tuned language models to offer a scalable solution for timely decision-making in resource allocation, situational awareness, and post-disaster recovery planning for responders.
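The F1 scores the abstract reports can be understood as set-level precision/recall over extracted spans: a prediction counts as a true positive only if the span is in the gold set of impacted locations. A minimal sketch, using the Mati example from the abstract with a hypothetical model output (the scoring helper and the "model" prediction are illustrative, not from the paper):

```python
def span_f1(predicted, gold):
    """F1 over sets of extracted location spans."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Gold annotation for the example post: "Mati" is impacted,
# "Greece" and "Athens" are mentioned but not impacted.
gold_impacted = ["Mati"]

# Hypothetical model output: "Athens" wrongly tagged as impacted.
pred_impacted = ["Mati", "Athens"]

f1 = span_f1(pred_impacted, gold_impacted)  # precision 0.5, recall 1.0
```

Here the spurious "Athens" prediction drops precision to 0.5 while recall stays at 1.0, giving an F1 of about 0.67 — close to the 0.69/0.74 range the paper reports, which shows how sensitive the metric is to confusing mentioned locations with impacted ones.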